Results 1 - 2 of 2
1.
Journal of Electronic Imaging; 32(2), 2023.
Article in English | Scopus | ID: covidwho-2321319

ABSTRACT

Computed tomography (CT) image-based medical recognition is widely used for COVID-19 recognition because it improves recognition accuracy and scanning rate. An intelligent compression and recognition system based on vision computing for CT COVID images (ICRS-VC-COVID) was developed. The proposed system first preprocesses lung CT COVID images. Segmentation then splits each image into two regions: the non-region of interest (NROI), compressed with lossy fractal coding, and the region of interest (ROI), compressed losslessly with context tree weighting. Subsequently, a fast discrete curvelet transform (FDCT) is applied. Finally, vector quantization is implemented through the encoder, channel, and decoder. Two experiments were conducted to test the proposed ICRS-VC-COVID. The first evaluated segmentation compression, the FDCT, the wavelet transform, and the discrete curvelet transform (DCT); the second evaluated the FDCT, wavelet transform, and DCT with segmentation. The results demonstrate a significant improvement in performance parameters such as mean square error, peak signal-to-noise ratio, and compression ratio. At similar computational complexity, the proposed ICRS-VC-COVID is superior to some existing techniques; moreover, at the same bit rate, it significantly improves image quality. Thus, the proposed method can enable lung CT COVID images to be used for disease recognition with low computational power and storage. © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JEI.32.2.021404] © 2023 SPIE. All rights reserved.
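A minimal sketch of this kind of ROI-based hybrid compression pipeline is shown below; it is not the authors' implementation. A threshold-based mask stands in for the paper's segmentation step, zlib stands in for context-tree-weighting lossless coding, coarse DCT coefficient pruning stands in for fractal lossy coding of the NROI, and reconstruction quality is summarized with MSE and PSNR.

```python
# Sketch of an ROI-based hybrid compression pipeline (illustrative stand-ins only):
# zlib replaces context-tree-weighting lossless coding, DCT coefficient pruning
# replaces fractal lossy coding, and a simple threshold replaces lung segmentation.
import zlib
import numpy as np
from scipy.fft import dctn, idctn

def segment_roi(img, thresh=0.35):
    """Crude ROI mask: bright (tissue-like) pixels are treated as ROI."""
    return img > thresh

def compress_roi_lossless(img, mask):
    """Quantize ROI pixels to 8 bits and pack them losslessly (stand-in for CTW)."""
    roi_bytes = (img[mask] * 255).astype(np.uint8).tobytes()
    return zlib.compress(roi_bytes, level=9)

def compress_nroi_lossy(img, mask, keep=0.1):
    """Lossy background path: keep only the largest DCT coefficients
    (stand-in for fractal compression of the non-ROI region)."""
    bg = np.where(mask, 0.0, img)
    coeffs = dctn(bg, norm="ortho")
    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return coeffs

def reconstruct(mask, roi_blob, nroi_coeffs):
    """Invert the background transform, then restore the ROI pixels."""
    roi_vals = np.frombuffer(zlib.decompress(roi_blob), dtype=np.uint8) / 255.0
    out = idctn(nroi_coeffs, norm="ortho")
    out[mask] = roi_vals
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((128, 128))                  # placeholder CT slice in [0, 1)
    mask = segment_roi(ct)
    roi_blob = compress_roi_lossless(ct, mask)
    nroi_coeffs = compress_nroi_lossy(ct, mask)
    rec = reconstruct(mask, roi_blob, nroi_coeffs)
    mse = float(np.mean((ct - rec) ** 2))
    psnr = 10 * np.log10(1.0 / mse) if mse > 0 else float("inf")
    print(f"ROI bytes: {len(roi_blob)}, MSE: {mse:.5f}, PSNR: {psnr:.2f} dB")
```

In this sketch only the background trades fidelity for rate; the ROI is reconstructed from its losslessly coded values, which mirrors the motivation for splitting the image before coding.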

2.
IEEE Access; 11:16621-16630, 2023.
Article in English | Scopus | ID: covidwho-2281059

ABSTRACT

Medical image segmentation is a crucial way to assist doctors in the accurate diagnosis of diseases. However, segmentation accuracy needs further improvement because many medical images are noisy and the background and target regions are often highly similar. Current mainstream segmentation networks, such as TransUnet, achieve accurate image segmentation, but their encoders do not consider the local connections between adjacent patches, and their decoders lack inter-channel information interaction during upsampling. To address these problems, this paper proposes a dual-encoder image segmentation network comprising a HarDNet68 branch and a Transformer branch, which extract the local features and global feature information of the input image, allowing the segmentation network to learn more image information and thus improving the effectiveness and accuracy of medical segmentation. To fuse image feature information of different dimensions across the encoding and decoding stages, we also propose a feature adaptation fusion module that fuses the channel information of multi-level features and realizes information interaction between channels, further improving segmentation accuracy. Experimental results on the CVC-ClinicDB, ETIS-Larib, and COVID-19 CT datasets show that the proposed model performs better on four evaluation metrics (Dice, IoU, precision, and sensitivity) and achieves better segmentation results in both internal filling and edge prediction of medical images. Accurate medical image segmentation can help doctors identify cancerous regions early, ensure that cancer patients receive timely targeted treatment, and improve their quality of life. © 2013 IEEE.
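As a rough illustration of the dual-encoder idea (not the published architecture), the PyTorch sketch below pairs a small convolutional branch standing in for HarDNet68 with a patch-embedding Transformer branch, and fuses them with a squeeze-and-excitation style channel gate standing in for the paper's feature adaptation fusion module. All layer sizes and the decoder head are placeholders.

```python
# Sketch of a dual-encoder segmentation network: a CNN branch (placeholder for
# HarDNet68) for local features, a Transformer branch for global context, and a
# channel-gating fusion block (placeholder for the feature adaptation fusion module).
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Local-feature encoder (placeholder for HarDNet68)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)                                   # (B, ch, H/4, W/4)

class TransformerBranch(nn.Module):
    """Global-context encoder: patch embedding followed by Transformer layers."""
    def __init__(self, ch=64, patch=4, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(1, ch, patch, stride=patch)   # one token per patch
        layer = nn.TransformerEncoderLayer(d_model=ch, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
    def forward(self, x):
        t = self.embed(x)                                    # (B, ch, H/4, W/4)
        b, c, h, w = t.shape
        t = self.encoder(t.flatten(2).transpose(1, 2))       # (B, HW, ch)
        return t.transpose(1, 2).reshape(b, c, h, w)

class ChannelFusion(nn.Module):
    """Squeeze-and-excitation style gate over the concatenated branch features,
    standing in for the paper's feature adaptation fusion module."""
    def __init__(self, ch=128, r=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // r, 1), nn.ReLU(),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(ch, ch // 2, 1)
    def forward(self, a, b):
        x = torch.cat([a, b], dim=1)
        return self.proj(x * self.gate(x))                   # reweight, then project

class DualEncoderSeg(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.cnn = ConvBranch(ch)
        self.trans = TransformerBranch(ch)
        self.fuse = ChannelFusion(2 * ch)
        self.head = nn.Sequential(                            # toy decoder head
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 1),
        )
    def forward(self, x):
        return self.head(self.fuse(self.cnn(x), self.trans(x)))

if __name__ == "__main__":
    model = DualEncoderSeg()
    x = torch.randn(2, 1, 128, 128)               # batch of single-channel CT slices
    print(model(x).shape)                         # torch.Size([2, 1, 128, 128])
```

Concatenating the two feature maps and reweighting their channels before projection is one simple way to let the local and global branches exchange channel information, which is the role the abstract assigns to its fusion module; metrics such as Dice and IoU would then be computed by thresholding the output logits and measuring overlap with ground-truth masks.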
